.. _Tutorial Compute gradients Regression python:

Tutorial: compute gradients
===========================

The following section uses the test case :std:ref:`Energy consumption test case`. This test case is delivered with the NeurEco installation package.

The first step is to load the data and build a model:

.. code-block:: python

    import numpy as np
    from NeurEco import NeurEcoTabular as Tabular  # NeurEco Tabular module (import path assumed, as in the other tutorials)

    ''' Load the training data '''
    print("Loading the training data".center(60, "*"))
    x_train = np.genfromtxt("x_train.csv", delimiter=";", skip_header=True)
    y_train = np.genfromtxt("y_train.csv", delimiter=";", skip_header=True)
    y_train = np.reshape(y_train, (-1, 1))
    ''' Create a NeurEco object to build the model '''
    print("Creating the NeurEco builder".center(60, "*"))
    builder = Tabular.Regressor()
    ''' Build the NeurEco model '''
    builder.build(input_data=x_train, output_data=y_train,
                  # the rest of these parameters are optional
                  write_model_to="./EnergyConsumptionModel/EnergyConsumption.ednn",
                  checkpoint_address="./EnergyConsumptionModel/EnergyConsumption.checkpoint",
                  valid_percentage=33.33,
                  inputs_shifting="min_centered",
                  inputs_scaling="max_centered")
    ''' Delete the builder from memory '''
    print("Deleting the NeurEco builder".center(60, "*"))
    builder.delete()

| Once the model is built, we will load it and try the automatic differentiation methods.
| (See :std:ref:`Compute gradients Regression python` for the description of these methods.)
| The first step is to get the weights of the model:

.. code-block:: python

    model = Tabular.Regressor()
    model.load("./EnergyConsumptionModel/EnergyConsumption")
    weights = model.get_weights()
    print("Weights shape: ", weights.shape)

.. code-block:: text

    Weights shape: (52, 1)

The next step is to compute the forward derivatives. To do so, let's assume that the inputs of the model are static (only the weights are "trainable"). Let's call d_weights the amount of perturbation applied to the weights array, and let's use the training inputs (x_train) as inputs. To get the value of :math:`\frac{dy}{dw}`, we simply need to run the following method:

.. code-block:: python

    d_weights = 1e-2 * np.random.random(weights.shape)
    dy_dw = model.forward_derivative(w=weights, dw=d_weights, x=x_train, dx=None)
    print("The first 10 values of dy_dw are: ", dy_dw[:10])

.. code-block:: text

    The first 10 values of dy_dw are:  [[0.6181552 ]
     [0.57651703]
     [0.55274752]
     [0.5574237 ]
     [0.56446744]
     [0.55196591]
     [0.538522  ]
     [0.50263322]
     [0.38963915]
     [0.34190546]]

.. note::

    In this case, the inputs are static (they do not change from one iteration to the next), so the value of *dx* was set to None. If that is not the case, *dx* takes an array of the same shape as the inputs passed as *x*, and the output will be :math:`\frac{dy}{dw} + \frac{dy}{dx}`.

The next step is to compute the backward derivative (gradient). Let's call d_ytrain the amount of perturbation applied to the output of the network (this is generally given by the loss function). To get the values of :math:`\frac{dw}{dy}` and :math:`\frac{dx}{dy}`, we simply run the following method:

.. code-block:: python

    d_ytrain = np.random.random(y_train.shape)
    dw_dy, dx_dy = model.gradient(w=weights, x=x_train, py=d_ytrain)
    print("The first 3 elements of dx_dy are:", dx_dy[:3, :])
    print("The first 3 elements of dw_dy are:", dw_dy[:3, :])

.. code-block:: text

    The first 3 elements of dx_dy are: [[ 1.23235363e+00 -1.88618482e-02  6.99468514e-02  7.60385221e-03 -7.81062149e-02]
     [ 3.95829914e-03 -4.36475779e-05  1.99880853e-04  4.20771088e-05 -1.32884476e-04]
     [ 3.84179212e-01 -4.24332874e-03  2.57446821e-02  4.69308383e-03 -1.32097589e-02]]
    The first 3 elements of dw_dy are: [[  31.70801912]
     [1218.88311964]
     [3539.90351476]]
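For illustration only, the weight gradient returned above could drive a simple steepest-descent update before the model is saved. The sketch below is not part of the tutorial: the learning rate, the update rule and the *updated_weights* name are assumptions, and it only reuses the *gradient* method shown in this section.

.. code-block:: python

    ''' Hypothetical single gradient-descent step on the weights (illustration only) '''
    learning_rate = 1e-3                               # assumed value, not part of the tutorial
    dw_dy, dx_dy = model.gradient(w=weights, x=x_train, py=d_ytrain)
    updated_weights = weights - learning_rate * dw_dy  # steepest-descent update on the weights array
    # updated_weights could then be applied with set_weights and persisted with save, as shown below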
At this stage, let's suppose that the optimization process has ended, and that we have a new weights array that we want to keep. We have to set the weights of the model to this new array and save the model, so that when we load it again, the weights are already changed.

.. warning::

    Setting the weights changes the weights of the model only temporarily (as long as the session is running). If the user wishes to change them permanently, they should save the model after the *set_weights* method is called.

.. code-block:: python

    new_weights = weights + d_weights
    model.set_weights(new_weights)
    save_status = model.save("./EnergyConsumptionModel/EnergyConsumption_NewWeights")
    if save_status == 0:
        print("New model saved successfully !!!")
    else:
        print("Unable to save the new model.")
    model.delete()

.. code-block:: text

    New model saved successfully !!!
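As a quick check (not part of the original tutorial), the saved model can be reloaded and its weights compared with the array that was set. This sketch reuses only the *load*, *get_weights* and *delete* methods shown above, and assumes that *load* accepts the same path (without extension) that was passed to *save*.

.. code-block:: python

    ''' Reload the saved model and confirm that the new weights were persisted '''
    check_model = Tabular.Regressor()
    check_model.load("./EnergyConsumptionModel/EnergyConsumption_NewWeights")
    reloaded_weights = check_model.get_weights()
    print("Maximum difference between set and reloaded weights:",
          np.max(np.abs(reloaded_weights - new_weights)))
    check_model.delete()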